Gravitational wave astronomy is a vibrant field that leverages both classical and modern data processing techniques to understand the universe. Various approaches have been proposed to improve the efficiency of detection schemes, with hierarchical matched filtering being an important strategy. Meanwhile, deep learning methods have recently demonstrated both consistency with matched filtering and remarkable statistical performance. In this work, we propose the Hierarchical Detection Network (HDN), a novel approach to efficient detection that combines ideas from hierarchical matching and deep learning. The network is trained using a novel loss function that simultaneously encodes the objectives of statistical accuracy and efficiency. We discuss the sources of complexity reduction in the proposed model, and describe a general recipe for initialization in which each layer specializes in a different region. Using experiments on open LIGO data with synthetic injections, we demonstrate the performance of HDN, observing a 79% efficiency gain over matched filtering with a two-layer model at an error rate of 0.2%. We further show how training a three-layer HDN initialized from the two-layer model can boost both accuracy and efficiency, highlighting the power of multiple simple layers in efficient detection.
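The hierarchical scheme above accelerates the classical matched filter, which correlates the data stream against a bank of waveform templates. As a minimal sketch (the baseline technique, not the paper's HDN), the following computes a matched-filter SNR time series for a single template under the simplifying assumption of whitened noise; the template shape, injection time, and amplitude are all illustrative.

```python
import numpy as np

def matched_filter_snr(data, template):
    """Correlate data against a unit-norm template at all time shifts.

    Assumes whitened (unit-variance, white) noise, so the correlation
    at each lag is directly an SNR time series.
    """
    template = template / np.linalg.norm(template)
    n = len(data)
    # Zero-pad the template to the data length and correlate via FFT.
    t_pad = np.zeros(n)
    t_pad[: len(template)] = template
    return np.fft.irfft(np.fft.rfft(data) * np.conj(np.fft.rfft(t_pad)), n)

rng = np.random.default_rng(0)
template = np.sin(2 * np.pi * 30 * np.linspace(0, 0.25, 256)) * np.hanning(256)
data = rng.standard_normal(4096)                     # whitened noise
inject_at = 1500
data[inject_at : inject_at + 256] += 8.0 * template / np.linalg.norm(template)
snr = matched_filter_snr(data, template)
print(int(np.argmax(snr)))  # peak at (or near) the injection time
```

The FFT correlation evaluates all lags at once, which is exactly the per-template cost that hierarchical schemes try to amortize across a template bank.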
As engineering systems grow in complexity, there is an increasing need for automated methods that can detect, diagnose, and even correct the transient anomalies that inevitably arise, and that can be difficult or impossible to diagnose and repair manually. Among the most sensitive and complex systems of our civilization are the detectors that search for incredibly small changes in distance caused by gravitational waves, a phenomenon originally predicted by Albert Einstein, which emerge and propagate through the universe as black holes and other massive objects in deep space collide. The extreme complexity and precision of such detectors make them subject to transient noise issues that can significantly limit their sensitivity and effectiveness. In this work, we present a demonstration of a method that can detect and characterize emergent transient anomalies in such massively complex systems. We illustrate the performance, precision, and adaptability of this automated solution on one of the prevalent issues limiting gravitational-wave discoveries: noise artifacts of terrestrial origin that contaminate the highly sensitive measurements of gravitational-wave observatories and can mimic the astrophysical signals they are listening for. Specifically, we demonstrate how a highly interpretable convolutional classifier can automatically learn to detect transient anomalies from auxiliary detector data, without needing to observe the anomalies themselves. We also illustrate several other useful features of the model, including how it performs automatic variable selection to reduce tens of thousands of auxiliary data channels to only a few relevant ones; how it identifies behavioral signatures of the anomalies in those channels; and how it can be used to investigate individual anomalies and their associated channels.
As our ability to sense increases, we are experiencing a transition from data-poor problems, in which the central issue is a lack of relevant data, to data-rich problems, in which the central issue is to identify a few relevant features in a sea of observations. Motivated by applications in gravitational-wave astrophysics, we study the problem of predicting the presence of transient noise artifacts in a gravitational-wave detector from a rich collection of measurements of the detector and its environment. We argue that feature learning, in which relevant features are optimized from data, is critical to achieving high accuracy. We introduce models that reduce the error rate by over 60% relative to the previous state of the art, which used fixed, hand-crafted features. Feature learning is useful not only because it can improve performance on prediction tasks; the results also provide valuable information about patterns associated with the phenomena of interest that would otherwise be undiscoverable. In our application, features found to be associated with transient noise provide diagnostic information about their origin and suggest mitigation strategies. Learning in high-dimensional settings is challenging. Through experiments with a variety of architectures, we identify two key factors in successful models: sparsity, for selecting relevant variables within the high-dimensional observations; and depth, which confers flexibility for handling complex interactions and robustness with respect to temporal variations. We illustrate their significance through systematic experiments on real detector data. Our results provide experimental corroboration of common assumptions in the machine-learning community, and have direct applicability to improving our ability to sense gravitational waves, as well as to many other problems with similarly high-dimensional, noisy, or partially irrelevant data.
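The abstract above credits sparsity with selecting relevant variables in high-dimensional observations. A minimal, generic illustration of that mechanism (not the authors' architecture) is ℓ1-regularized regression solved by ISTA, where the soft-threshold step drives irrelevant coordinates exactly to zero; the data and penalty below are synthetic and illustrative.

```python
import numpy as np

def ista_lasso(X, y, lam, iters=500):
    """Minimize 0.5*||X w - y||^2 + lam*||w||_1 via ISTA (proximal
    gradient descent). Soft-thresholding zeroes out irrelevant
    coordinates, performing variable selection."""
    lr = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L, L = Lipschitz constant
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        z = w - lr * (X.T @ (X @ w - y))                     # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0)  # soft-threshold
    return w

rng = np.random.default_rng(0)
n, d = 200, 50
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[[3, 17, 41]] = [2.0, -1.5, 1.0]   # only 3 of 50 features matter
y = X @ w_true + 0.1 * rng.standard_normal(n)
w_hat = ista_lasso(X, y, lam=10.0)
selected = np.flatnonzero(np.abs(w_hat) > 0.1)
print(selected)
```

The recovered support points directly at the relevant variables, which is the diagnostic value the abstract highlights: the selected channels themselves carry information about the phenomenon.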
This work attempts to provide a plausible theoretical framework that aims to interpret modern deep (convolutional) networks from the principles of data compression and discriminative representation. We argue that for high-dimensional multi-class data, the optimal linear discriminative representation maximizes the coding rate difference between the whole dataset and the average over all the subsets. We show that the basic iterative gradient ascent scheme for optimizing the rate reduction objective naturally leads to a multi-layer deep network, named ReduNet, which shares common characteristics of modern deep networks. The deep layered architecture, linear and nonlinear operators, and even the parameters of the network are all explicitly constructed layer by layer via forward propagation, although they are amenable to fine-tuning via back propagation. All components of the obtained "white-box" network have precise optimization, statistical, and geometric interpretations. Moreover, when we enforce the classification to be shift-invariant, all linear operators of the so-derived network naturally become multi-channel convolutions. The derivation in the invariant setting suggests a trade-off between sparsity and invariance, and also indicates that such a deep convolutional network is significantly more efficient to construct and learn in the spectral domain. Our preliminary simulations and experiments clearly verify the effectiveness of both the rate reduction objective and the associated ReduNet. All code and data are available at \url{https://github.com/ma-lab-berkeley}.
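The rate reduction objective can be stated compactly: for features Z in R^(d x n) with class labels, maximize ΔR = R(Z) − Rc(Z), the coding rate of the whole set minus the average rate of the class subsets. A small numerical sketch of the objective itself (following the published formulas; the feature configurations below are illustrative):

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z) = 1/2 logdet(I + d/(n eps^2) Z Z^T), for Z of shape (d, n)."""
    d, n = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)
    return 0.5 * logdet

def rate_reduction(Z, labels, eps=0.5):
    """Delta R = R(Z) - sum_j (n_j/n) R(Z_j): whole-set rate minus the
    sample-weighted average rate of the class subsets."""
    n = Z.shape[1]
    Rc = sum((np.sum(labels == c) / n) * coding_rate(Z[:, labels == c], eps)
             for c in np.unique(labels))
    return coding_rate(Z, eps) - Rc

# Two classes on orthogonal directions: expanded whole, compact parts.
Z_orth = np.hstack([np.outer([1, 0, 0], np.ones(50)),
                    np.outer([0, 1, 0], np.ones(50))])
# Two classes collapsed onto the same direction: no rate is gained.
Z_same = np.hstack([np.outer([1, 0, 0], np.ones(50)),
                    np.outer([1, 0, 0], np.ones(50))])
labels = np.array([0] * 50 + [1] * 50)
rr_orth = rate_reduction(Z_orth, labels)
rr_same = rate_reduction(Z_same, labels)
print(rr_orth, rr_same)
```

Discriminative (orthogonal) class features yield a strictly positive ΔR, while collapsed features yield zero, which is the sense in which maximizing ΔR drives classes apart.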
As science and engineering become increasingly data-driven, the role of optimization has expanded to touch almost every stage of the data analysis pipeline, from signal and data acquisition to modeling and prediction. The optimization problems encountered in practice are often nonconvex. While the challenges vary from problem to problem, one common source of nonconvexity is nonlinearity in the data or measurement model. Nonlinear models often exhibit symmetries, creating complicated, nonconvex objective landscapes with multiple equivalent solutions. Nevertheless, simple methods (e.g., gradient descent) often perform surprisingly well in practice. The goal of this survey is to highlight a class of tractable nonconvex problems which can be understood through the lens of symmetry. These problems exhibit a characteristic geometric structure: local minimizers are symmetric copies of a single "ground truth" solution, while other critical points occur at balanced superpositions of symmetric copies of the ground truth, and exhibit negative curvature in directions that break the symmetry. This structure enables efficient methods to obtain global minimizers. We discuss examples of this phenomenon arising from a wide range of problems in imaging, signal processing, and data analysis. We highlight the key role of symmetry in shaping the objective landscape, and discuss the different roles of rotational and discrete symmetries. This area is rich with observed phenomena and open problems; we close by highlighting directions for future research.
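A canonical toy instance of this geometry (one example among the many the survey covers) is phase retrieval: the measurements y_i = ⟨a_i, x⋆⟩² are invariant under the discrete sign symmetry x ↦ −x, so ±x⋆ are two equivalent global minimizers, and gradient descent from a spectral initialization lands on one of them. The dimensions and step size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 200
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)   # planted unit-norm signal
A = rng.standard_normal((m, n))
y = (A @ x_star) ** 2              # phaseless measurements: x and -x agree

def grad(x):
    """Gradient of f(x) = (1/m) sum_i ((a_i^T x)^2 - y_i)^2."""
    Ax = A @ x
    return (4 / m) * A.T @ ((Ax**2 - y) * Ax)

# Spectral initialization: top eigenvector of (1/m) sum_i y_i a_i a_i^T,
# which concentrates around a matrix whose leading direction is +/- x_star.
Y = (A * y[:, None]).T @ A / m
_, V = np.linalg.eigh(Y)
x = V[:, -1] * np.sqrt(np.mean(y))

for _ in range(3000):
    x -= 0.02 * grad(x)

# Gradient descent converges to one of the two symmetric copies of x_star.
err = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
print(err)
```

Which of the two copies is reached depends only on the initialization; both are exact global minimizers, exactly the "symmetric copies of a single ground truth" structure the survey describes.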
This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
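A standard way to solve Principal Component Pursuit in practice is an inexact augmented Lagrangian iteration, alternating singular value thresholding for the low-rank part with elementwise soft-thresholding for the sparse part. A sketch on synthetic data follows; the iteration count and penalty schedule are common choices, not prescribed by the abstract.

```python
import numpy as np

def pcp(M, iters=200):
    """Principal Component Pursuit,  min ||L||_* + lam ||S||_1  s.t. L + S = M,
    solved by an inexact augmented Lagrangian method."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))        # weight suggested by the theory
    Y = np.zeros_like(M)                  # dual variable
    S = np.zeros_like(M)
    mu = 1.25 / np.linalg.norm(M, 2)      # penalty parameter
    for _ in range(iters):
        # L-step: singular value thresholding of M - S + Y/mu at level 1/mu.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-step: elementwise soft-thresholding at level lam/mu.
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Y = Y + mu * (M - L - S)          # dual ascent on the constraint
        mu = min(mu * 1.5, 1e7)
    return L, S

rng = np.random.default_rng(0)
L0 = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))  # rank 2
S0 = np.zeros((60, 60))
idx = rng.random((60, 60)) < 0.05                                 # 5% gross errors
S0[idx] = 10.0 * rng.standard_normal(int(idx.sum()))
L, S = pcp(L0 + S0)
rel_err = np.linalg.norm(L - L0) / np.linalg.norm(L0)
print(rel_err)
```

Despite 5% of entries being arbitrarily corrupted, the low-rank component is recovered to high accuracy, which is the abstract's central claim.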
We demonstrate a proof-of-concept of a large language model conducting corporate lobbying related activities. We use an autoregressive large language model (OpenAI's text-davinci-003) to determine if proposed U.S. Congressional bills are relevant to specific public companies and provide explanations and confidence levels. For the bills the model deems as relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model, which outperforms the baseline of predicting the most common outcome of irrelevance. However, we test the ability to determine the relevance of a bill with the previous OpenAI GPT-3 model (text-davinci-002), which was state-of-the-art on many language tasks until text-davinci-003 was released on November 28, 2022. The performance of text-davinci-002 is worse than simply always predicting that a bill is irrelevant to a company. These results suggest that, as large language models continue to improve core natural language understanding capabilities, performance on corporate lobbying related tasks will continue to improve. We then discuss why this could be problematic for societal-AI alignment.
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
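A common practical diagnostic for posterior collapse (a generic check, distinct from the paper's identifiability analysis) is the per-dimension KL divergence between the approximate posterior and the prior: collapsed dimensions have KL near zero. The sketch below uses hypothetical diagonal-Gaussian encoder outputs with a standard-normal prior.

```python
import numpy as np

def kl_per_dim(mu, logvar):
    """KL( N(mu, exp(logvar)) || N(0, 1) ) per latent dimension, averaged
    over a batch. Dimensions with KL ~ 0 have collapsed: their posterior
    ignores the input and equals the prior."""
    kl = 0.5 * (mu**2 + np.exp(logvar) - logvar - 1.0)
    return kl.mean(axis=0)

rng = np.random.default_rng(0)
batch, latent_dim = 128, 5
# Hypothetical encoder outputs: dims 0-1 carry signal, dims 2-4 collapsed
# (posterior exactly equals the N(0, 1) prior regardless of the input).
mu = np.zeros((batch, latent_dim))
mu[:, :2] = rng.standard_normal((batch, 2))  # input-dependent means
logvar = np.zeros((batch, latent_dim))
logvar[:, :2] = -2.0                         # confident (small variance)
kl = kl_per_dim(mu, logvar)
active = np.flatnonzero(kl > 0.01)
print(active)
```

Under the paper's result, such zero-KL dimensions correspond exactly to latent variables that are non-identifiable in the generative model.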
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
In this paper we derive a PAC-Bayesian-Like error bound for a class of stochastic dynamical systems with inputs, namely, for linear time-invariant stochastic state-space models (stochastic LTI systems for short). This class of systems is widely used in control engineering and econometrics, in particular, they represent a special case of recurrent neural networks. In this paper we 1) formalize the learning problem for stochastic LTI systems with inputs, 2) derive a PAC-Bayesian-Like error bound for such systems, 3) discuss various consequences of this error bound.
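The system class in question can be simulated in a few lines: x_{t+1} = A x_t + B u_t + w_t with observation y_t = C x_t + v_t. The sketch below uses a hypothetical stable two-state model with scalar input and output; the matrices and noise levels are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# A hypothetical stable stochastic LTI state-space model.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])               # spectral radius < 1 => stable
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
assert np.max(np.abs(np.linalg.eigvals(A))) < 1.0

T = 500
x = np.zeros(2)
ys = []
for t in range(T):
    u = np.array([np.sin(0.05 * t)])     # deterministic input signal
    w = 0.1 * rng.standard_normal(2)     # process noise
    v = 0.1 * rng.standard_normal(1)     # measurement noise
    ys.append((C @ x + v).item())        # noisy observation y_t
    x = A @ x + B @ u + w                # state update x_{t+1}

ys = np.array(ys)
print(ys.shape)
```

Learning, in the paper's sense, means estimating (A, B, C) and the noise statistics from input-output trajectories like (u_t, y_t); the PAC-Bayesian bound controls the generalization error of such estimates.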